Human activity recognition making use of long short-term memory techniques
The optimisation and validation of a classifier's performance when applied to real-world problems is not always effectively shown. In much of the literature describing the application of artificial neural network architectures to Human Activity Recognition (HAR) problems, postural transitions are grouped together and treated as a single class. This paper proposes, investigates and validates the development of an optimised artificial neural network based on Long Short-Term Memory (LSTM) techniques, with repeated cross-validation used to validate the performance of the classifier. The results of the optimised LSTM classifier are comparable to or better than those of previous research using the same dataset, achieving 95% accuracy under repeated 10-fold cross-validation with grouped postural transitions. The work in this paper also achieves 94% accuracy under repeated 10-fold cross-validation whilst treating each common postural transition as a separate class (and thus providing more context to each activity).
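The repeated 10-fold protocol described above can be sketched in a few lines. This is a minimal, self-contained illustration of the validation procedure only: a trivial threshold rule stands in for the paper's LSTM, and the data are invented.

```python
# Sketch of repeated k-fold cross-validation: shuffle, split into k
# folds, let each fold serve once as the test set, then repeat the
# whole procedure with fresh shuffles and average the accuracies.
# A trivial threshold "classifier" stands in for the paper's LSTM.
import random

def repeated_kfold_accuracy(X, y, k=10, repeats=3, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(X)))
    scores = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
        for test in folds:
            train = [i for i in idx if i not in set(test)]
            # stand-in classifier: predict class 1 when the feature
            # exceeds the mean of the training examples
            threshold = sum(X[i] for i in train) / len(train)
            correct = sum((X[i] > threshold) == y[i] for i in test)
            scores.append(correct / len(test))
    return sum(scores) / len(scores)

X = [0.1, 0.2, 0.8, 0.9, 0.15, 0.85, 0.25, 0.75, 0.05, 0.95]
y = [0, 0, 1, 1, 0, 1, 0, 1, 0, 1]
print(f"mean accuracy: {repeated_kfold_accuracy(X, y):.2f}")
```

Repeating the k-fold split with different shuffles, as the paper does, reduces the variance that a single random partition introduces into the accuracy estimate.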
A Novel Workload Allocation Strategy for Batch Jobs
The distribution of computational tasks across a diverse set of geographically distributed heterogeneous resources is a critical issue in the realisation of true computational grids. Conventionally, workload allocation algorithms are divided into static and dynamic approaches. Whilst dynamic approaches frequently outperform static schemes, they usually require the collection and processing of detailed system information at frequent intervals, a task that can be both time-consuming and unreliable in the real world. This paper introduces a novel workload allocation algorithm for optimally distributing the workload produced by the arrival of batches of jobs. Results show that, for the arrival of batches of jobs, this workload allocation algorithm outperforms other commonly used algorithms in the static case. A hybrid scheduling approach (using this workload allocation algorithm), in which information about the speed of computational resources is inferred from previously completed jobs, is then introduced and the efficiency of this approach demonstrated using a real-world computational grid. These results are compared to the same workload allocation algorithm used in the static case, and it can be seen that the hybrid approach comprehensively outperforms the static approach.
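The abstract does not reproduce the allocation algorithm itself, but the basic idea of static allocation across heterogeneous resources can be illustrated by a proportional split: each resource receives a share of the batch in proportion to its (known or, in the hybrid case, inferred) processing speed. The function and speed values below are illustrative, not the paper's algorithm.

```python
# Minimal sketch of static workload allocation for a batch of jobs:
# each resource gets a share of the batch proportional to its speed.
# Fractional shares are rounded down and the leftover jobs handed out
# largest-remainder first so the whole batch is assigned.
def allocate_batch(n_jobs, speeds):
    """Split n_jobs across resources in proportion to their speeds."""
    total = sum(speeds)
    shares = [n_jobs * s / total for s in speeds]
    alloc = [int(x) for x in shares]
    remainders = sorted(range(len(speeds)),
                        key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in remainders[:n_jobs - sum(alloc)]:
        alloc[i] += 1
    return alloc

print(allocate_batch(100, [4.0, 2.0, 1.0, 1.0]))  # → [50, 25, 13, 12]
```

In the hybrid scheme described above, the `speeds` would not be configured statically but estimated from the runtimes of previously completed jobs on each resource.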
Deep Learning Meets Cognitive Radio: Predicting Future Steps
Learning channel occupancy patterns in order to reuse underutilised spectrum frequencies without interfering with the incumbent is a promising approach to overcoming spectrum limitations. In this work we propose a Deep Learning (DL) approach to learn the channel occupancy model and predict channel availability in the next time slots. Our results show that the proposed DL approach outperforms existing works by 5%. We also show that the proposed DL approach accurately predicts the availability of channels for more than one time slot.
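To make the prediction task concrete: the model observes a binary sequence of slot occupancies (1 = busy, 0 = idle) and forecasts the next slot. The sketch below uses simple first-order transition counts as a stand-in predictor; the paper's deep-learning model is not reproduced here, and the example history is invented.

```python
# Illustrative next-slot predictor for a binary channel-occupancy
# sequence. First-order transition counts stand in for the paper's
# deep-learning model.
from collections import Counter

def train_transitions(history):
    """Count slot-to-slot transitions in an occupancy history."""
    counts = {0: Counter(), 1: Counter()}
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most likely next slot given the current one."""
    return counts[current].most_common(1)[0][0]

history = [1, 0, 1, 0, 1, 0, 1, 0]   # strictly alternating channel
counts = train_transitions(history)
print(predict_next(counts, 1))  # → 0 (idle slot expected after busy)
print(predict_next(counts, 0))  # → 1 (busy slot expected after idle)
```

Predicting several slots ahead, as the work above does, amounts to feeding each prediction back in as the new "current" slot, with accuracy degrading as the horizon grows.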
Multi-objective evolutionary design of robust controllers on the grid
Coupling conventional controller design methods, model-based controller synthesis and simulation, and multi-objective evolutionary optimisation methods frequently results in an extremely computationally expensive design process. However, the emerging paradigm of grid computing provides a powerful platform for the solution of such problems by providing transparent access to large-scale distributed high-performance compute resources. As well as substantially speeding up the time taken to find a single controller design satisfying a set of performance requirements, this grid-enabled design process allows a designer to effectively explore the solution space of potential candidate solutions. An example of this is the multi-objective evolutionary design of robust controllers, where each candidate controller design has to be synthesised and the resulting performance of the compensated system evaluated by computer simulation. This paper introduces a grid-enabled framework for the multi-objective optimisation of computationally expensive problems, which is then demonstrated using an example of the multi-objective evolutionary design of a robust lateral stability controller for a real-world aircraft using H∞ loop shaping.
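At the core of any multi-objective evolutionary method such as the one above is the Pareto-dominance test: candidate `a` dominates candidate `b` if it is no worse on every objective and strictly better on at least one. The sketch below (all objectives minimised, objective values invented) shows the test and how it filters a population down to its non-dominated front; in the paper's setting, controller synthesis and simulation would supply the objective values.

```python
# Pareto dominance and non-dominated filtering, the selection core of
# multi-objective evolutionary optimisation (all objectives minimised).
def dominates(a, b):
    """True if a is no worse than b everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(population):
    """Keep the candidates not dominated by any other candidate."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# illustrative objectives: (tracking error, control effort)
designs = [(0.2, 5.0), (0.3, 3.0), (0.25, 6.0), (0.4, 4.0)]
print(pareto_front(designs))  # → [(0.2, 5.0), (0.3, 3.0)]
```

Because every candidate evaluation is an independent synthesis-and-simulation run, the population evaluates naturally in parallel, which is exactly what makes the grid platform attractive here.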
A Novel Deep Learning Model for the Detection and Identification of Rolling Element-Bearing Faults
Real-time acquisition of large amounts of machine operating data is now increasingly common due to recent advances in Industry 4.0 technologies. A key benefit to factory operators of this large-scale data acquisition is the ability to perform real-time condition monitoring and early-stage fault detection and diagnosis on industrial machinery, with the potential to reduce machine down-time and thus operating costs. The main contribution of this work is the development of an intelligent fault diagnosis method capable of operating on these real-time data streams to provide early detection of developing problems under variable operating conditions. We propose a novel dual-path recurrent neural network with a wide first kernel and deep convolutional neural network pathway (RNN-WDCNN) capable of operating on raw temporal signals such as vibration data to diagnose rolling element bearing faults in data acquired from electromechanical drive systems. RNN-WDCNN combines elements of recurrent neural networks (RNNs) and convolutional neural networks (CNNs) to capture distant dependencies in time series data and suppress high-frequency noise in the input signals. Experimental results on the benchmark Case Western Reserve University (CWRU) bearing fault dataset show RNN-WDCNN outperforms current state-of-the-art methods in both domain adaptation and noise rejection tasks.
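The intuition behind the "wide first kernel" mentioned above can be shown with a plain moving average: convolving a raw vibration signal with a wide averaging kernel suppresses high-frequency noise while preserving the slower fault signature. This is only an analogy; RNN-WDCNN's first-layer kernels are learned, and the signal below is synthetic.

```python
# Wide-kernel smoothing analogy: a 64-sample averaging kernel
# attenuates high-frequency noise on a synthetic "vibration" signal
# while keeping the slow underlying component largely intact.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t)               # slow fault component
noisy = signal + 0.5 * rng.normal(size=t.size)   # high-frequency noise

kernel = np.ones(64) / 64                        # wide averaging kernel
smoothed = np.convolve(noisy, kernel, mode="same")

print(f"noise power before: {np.mean((noisy - signal) ** 2):.3f}")
print(f"noise power after:  {np.mean((smoothed - signal) ** 2):.3f}")
```

A learned wide kernel can go further than a fixed average, shaping its frequency response to pass exactly the fault-related bands, which is part of why the wide-first-kernel design helps with noise rejection.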
Optimisation of maintenance scheduling strategies on the grid
The emerging paradigm of Grid Computing provides a powerful platform for the optimisation of complex computer models, such as those used to simulate real-world logistics and supply chain operations. This paper introduces a Grid-based optimisation framework that provides a powerful tool for the optimisation of such computationally intensive objective functions. This framework is then used in the optimisation of maintenance scheduling strategies for fleets of aero-engines, a computationally intensive problem with a high degree of stochastic noise, achieving substantial improvements in the execution time of the algorithm.
Voltage sag estimation in sparsely monitored power systems based on deep learning and system area mapping
This paper proposes a voltage sag estimation approach based on a deep convolutional neural network. The proposed approach estimates the sag magnitude at unmonitored buses regardless of the system operating conditions and of the fault location and characteristics. The concept of system area mapping is also introduced via the use of a bus matrix, which maps different patches in the input matrix to various areas in the power system network. In this way, relevant features are extracted at various local areas in the power system and used in the analysis for higher-level feature extraction, before being fed into a fully connected multilayer neural network for sag classification. The approach has been tested on the IEEE 68-bus test network, and it has been demonstrated that the various sag categories can be identified accurately regardless of the operating condition under which the sags occur.
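The bus-matrix idea above can be sketched as a layout step: buses are placed on a 2-D grid so that electrically related buses occupy nearby matrix patches, letting convolutional filters pick up area-local features. The 3x3 layout, bus numbering, and sag values below are invented for illustration, not taken from the IEEE 68-bus study.

```python
# Illustrative "bus matrix" mapping: per-bus sag magnitudes are
# arranged on a 2-D grid (the CNN input) according to a fixed layout
# that groups related buses into neighbouring patches.
import numpy as np

# hypothetical placement of nine buses onto a 3x3 grid
bus_layout = np.array([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

def to_input_matrix(sag_magnitudes, layout):
    """Arrange per-bus sag magnitudes into the CNN input matrix."""
    return np.array([[sag_magnitudes[b] for b in row] for row in layout])

sags = {b: round(0.1 * b, 1) for b in range(1, 10)}  # dummy sag values
print(to_input_matrix(sags, bus_layout))
```

With measurements arranged this way, an early convolutional layer sees one local area of the network per receptive field, which is what allows area-specific features to be extracted before the fully connected classification stage.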
A novel method for identification of patients at risk of deterioration using FACS
Facial displays are used by health professionals to assess the wellbeing of patients at risk of deterioration. Surprisingly, there is not a single early warning system based on the assessment of facial expressions. There is ample literature that supports the study of facial expressions by means of anatomically based scoring systems, such as FACS (1). Preliminary studies suggested that outreach nurses identified mostly sadness and fear in patients at risk of deterioration (2). As part of a pilot study on analysing facial expressions in critical illness, this research has compared Action Units (AUs, in FACS terminology) from patients at risk of deterioration against AUs inferred from 20 facial images of patients deemed to be dying.
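A comparison of Action Unit sets like the one described above can be quantified with a simple set-overlap measure. The sketch below uses Jaccard similarity; the AU codes listed are illustrative examples of sadness/fear-related units (AU1 inner brow raiser, AU4 brow lowerer, AU15 lip corner depressor, AU20 lip stretcher), not the study's findings.

```python
# Jaccard similarity between two sets of observed FACS Action Units,
# one way to quantify how much two groups' facial displays overlap.
def jaccard(a, b):
    """Overlap between two sets of observed Action Units."""
    return len(a & b) / len(a | b)

at_risk_aus = {1, 4, 15, 20}    # illustrative sadness/fear-related AUs
reference_aus = {1, 4, 6, 12}   # illustrative comparison set
print(round(jaccard(at_risk_aus, reference_aus), 2))  # → 0.33
```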